23 research outputs found

    Logic Conditionals, Supervenience, and Selection Tasks

    Get PDF
    Principles of cognitive economy would require that concepts about objects, properties, and relations be introduced only if they simplify the conceptualisation of a domain. Unexpectedly, classic logic conditionals, which specify structures holding among the elements of a formal conceptualisation, do not always satisfy this crucial principle. The paper argues that this requirement is captured by supervenience, here further identified as a property necessary for compression. The resulting theory suggests an alternative explanation of the empirical results observed in Wason's selection tasks, associating human performance on conditionals with the ability to deal with compression rather than with logical necessity.
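    For orientation, here is a minimal sketch (not code from the paper) of the classical-logic answer to the standard four-card selection task the abstract refers to; most human subjects instead choose the vowel and the even number, and it is this discrepancy that the compression-based account targets.

```python
# Standard Wason selection task: cards show "A", "K", "4", "7"; the rule is
# "if a card has a vowel on one side, it has an even number on the other".
# Under classical logic, only cards whose hidden side could falsify the
# rule (p and not-q) need to be turned over.

VOWELS = set("AEIOU")

def must_turn(face: str) -> bool:
    """True iff the card's hidden side could falsify 'vowel -> even'."""
    if face in VOWELS:                         # p: the back must be even
        return True
    if face.isdigit() and int(face) % 2 == 1:  # not-q: the back must not be a vowel
        return True
    return False                               # "K" and "4" can never falsify the rule

print([card for card in ["A", "K", "4", "7"] if must_turn(card)])  # ['A', '7']
```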

    From Probabilistic Programming to Complexity-based Programming

    Full text link
    The paper presents the main characteristics and a preliminary implementation of a novel computational framework named CompLog. Inspired by probabilistic programming systems like ProbLog, CompLog builds upon the inferential mechanisms proposed by Simplicity Theory, relying on the computation of two Kolmogorov complexities (here implemented as min-path searches via ASP programs) rather than on probabilistic inference. The proposed system enables users to compute ex-post and ex-ante measures of the unexpectedness of a given situation, mapping respectively to posterior and prior subjective probabilities. The computation is based on the specification of world and mental models by means of causal and descriptive relations between predicates, weighted by complexity. The paper illustrates a few example applications: generating relevant descriptions, and providing alternative approaches to disjunction and to negation.
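    As a rough illustration of the mechanism described above (a hedged Python sketch, not CompLog's actual ASP implementation; all predicate names and weights are invented), each Kolmogorov complexity can be approximated by the cost of a min-path search over complexity-weighted relations, with unexpectedness as the difference between the causal and descriptive path costs.

```python
import heapq

def min_path_cost(edges, start, goal):
    """Dijkstra over complexity-weighted edges: the cheapest derivation
    of `goal` from `start` stands in for a Kolmogorov complexity."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if cost > dist.get(node, float("inf")):
            continue
        for nxt, weight in edges.get(node, []):
            new_cost = cost + weight
            if new_cost < dist.get(nxt, float("inf")):
                dist[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt))
    return float("inf")

# Invented world model (causal relations) and mental model (descriptive
# relations), each edge weighted by complexity.
causal = {"start": [("rain", 6.0)], "rain": [("wet_street", 1.0)]}
descriptive = {"start": [("wet_street", 2.0)]}

c_world = min_path_cost(causal, "start", "wet_street")       # generation complexity
c_descr = min_path_cost(descriptive, "start", "wet_street")  # description complexity
print(c_world - c_descr)  # 5.0: easy to describe, costly to generate -> unexpected
```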

    Three Conjectures on Unexpectedness

    Full text link
    Unexpectedness is a central concept in Simplicity Theory, a theory of cognition relating various inferential processes to the computation of Kolmogorov complexities rather than probabilities. Its predictive power has been confirmed by several experiments with human subjects, yet its theoretical basis remains largely unexplored: why does it work? This paper lays the groundwork for three theoretical conjectures. First, unexpectedness can be seen as a generalization of Bayes' rule. Second, the frequentist core of unexpectedness can be connected to the function of tracking ergodic properties of the world. Third, unexpectedness can be seen as a constituent of various measures of divergence between the entropy of the world (environment) and the variety of the observer (system). The resulting framework hints at research directions that go beyond the division between probabilistic and logical approaches, potentially bringing new insights into the extraction of causal relations and into the role of descriptive mechanisms in learning. Comment: Working paper.
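    For orientation, the core Simplicity Theory definitions these conjectures build on can be sketched as follows (standard in the Simplicity Theory literature; the Bayes-style reading is a paraphrase of the first conjecture, not the paper's derivation):

```latex
% Unexpectedness: generation complexity C_w(s) (how complex it is for the
% world to produce situation s) minus description complexity C(s) (how
% concisely the observer can describe s).
U(s) = C_w(s) - C(s)
% The subjective probability attached to s decays exponentially in U:
p(s) \approx 2^{-U(s)}
% Taking -log2 of Bayes' rule, p(h|e) = p(e|h) p(h) / p(e), turns products
% of probabilities into sums of code lengths; this is the sense in which a
% difference of complexities can generalize Bayesian updating.
```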

    The Role of Normware in Trustworthy and Explainable AI

    Get PDF
    Being potentially destructive, in practice incomprehensible, and for the most part unintelligible, contemporary technology sets serious challenges for our society. New conception methods are urgently required. Reorganizing ideas and discussions presented in AI and related fields, this position paper aims to highlight the importance of normware--that is, computational artifacts specifying norms--with respect to these issues, and argues for its irreducibility to software by making explicit its neglected ecological dimension in the decision-making cycle.

    Thirty years of Artificial Intelligence and Law: the second decade

    Get PDF
    The first issue of the Artificial Intelligence and Law journal was published in 1992. This paper provides commentaries on nine significant papers drawn from the Journal's second decade. Four of the papers relate to reasoning with legal cases: introducing contextual considerations, predicting outcomes on the basis of natural language descriptions of the cases, comparing different ways of representing cases, and formalising precedential reasoning. One introduces a method of analysing arguments that was to become very widely used in AI and Law, namely argumentation schemes. Two relate to ontologies for the representation of legal concepts, and two take advantage of the increasing availability of legal corpora in this decade, to automate document summarisation and to mine arguments.

    Legal linked data ecosystems and the rule of law

    Get PDF
    This chapter introduces the notions of meta-rule of law and socio-legal ecosystems to both foster and regulate linked democracy. It explores ways of stimulating innovative regulations and of building a regulatory quadrant for the rule of law. The chapter briefly summarises (i) the notions of responsive, better, and smart regulation; (ii) requirements for legal interchange languages (legal interoperability); and (iii) cognitive ecology approaches. It shows how the protections of the substantive rule of law can be embedded into the semantic languages of the web of data, and reflects on the conditions that make their enactment and implementation as a socio-legal ecosystem possible. The chapter ends by suggesting a reusable multi-levelled meta-model and four notions of legal validity: positive, composite, formal, and ecological.

    Mapping Value(s) in AI: The Case of YouTube

    Get PDF
    This paper presents a multidisciplinary approach (media studies, computer science, and legal scholarship) to the analysis of systems that rely on AI components as central elements of their design. Taking recommender systems more generally, and the one built by YouTube more specifically, we develop a methodology for conceptualizing and studying the broad array of “ideas”, “norms”, or “values” such systems mobilize. Instead of limiting ourselves to a narrow understanding of these terms, we take into account, for example, translations from economic models, social theories, legal requirements, ethical principles, technical knowledge, experiential evaluations, and other constructs used to define and justify design goals and decisions that shape the production of technical objects and, consequently, the objects themselves. In this paper we discuss three directions for analysis and present first results. Investigating technical knowledge includes the study of scholarly literature and experimentation with concrete objects such as LensKit for Python, to understand the “ambient” knowledge and normativity engineers and designers draw on. Investigating local circumstances involves ethnographic analysis, but also the reconstruction of the business models and legal environment that weigh on YouTube's design. Analyzing a system in use can draw on technical observation, via scraping or API data, of the actual dynamics of recommendation that emerge when users enter the equation. Taken together, these three approaches can “encircle” the various moments where value(s) are shaped and put to work in the context of systems where direct access to specifications is improbable.

    Mapping Value(s) in AI: Methodological Directions for Examining Normativity in Complex Technical Systems

    Get PDF
    This paper seeks to develop a multidisciplinary methodological framework and research agenda for studying the broad array of 'ideas', 'norms', or 'values' incorporated and mobilized in systems relying on AI components. We focus on recommender systems as a broader field of technical practice and take YouTube as an example of a concrete artifact that raises many social concerns. To situate the conceptual perspective and rationale informing our approach, we briefly discuss investigations into normativity in technology more broadly and refer to 'descriptive ethics' and 'ethigraphy' as two approaches concerned with the empirical study of values and norms. Drawing on science and technology studies, we argue that normativity cannot be reduced to ethics, but requires paying attention to a wider range of elements, including the performativity of material objects themselves. The method of 'encircling' is presented as a way to deal with both the secrecy surrounding many commercial systems and the socio-technical and distributed character of normativity more broadly. The resulting investigation aims to draw on a series of approaches and methods to construct a much wider picture than could result from one discipline alone. The remainder of the paper develops this methodological framework, organized into three layers that demarcate specific avenues for conceptual reflection and empirical research, moving from the more general to the more concrete: ambient technical knowledge, local design conditions, and materialized values. We conclude by arguing that deontological approaches to normativity in AI need to take into account the many different ways norms and values are embedded in technical systems.

    Qualifying Causes as Pertinent

    No full text
    Several computational methods have been proposed to evaluate the relevance of an instantiated cause to an observed consequence. The paper reports on an experiment to investigate the adequacy of some of these methods as descriptors of human judgments about causal relevance.